Santa Clara
- Asia > Middle East > Iran > Tehran Province > Tehran (0.04)
- North America > United States > Iowa > Story County > Ames (0.04)
- North America > United States > California > Santa Clara County > Santa Clara (0.04)
- Europe > Greece (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- South America > Paraguay > Asunción > Asunción (0.04)
- Asia > Middle East > Jordan (0.04)
- Asia > Taiwan (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.34)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.34)
- North America > United States > California > Santa Clara County > Santa Clara (0.04)
- Europe > United Kingdom > Scotland > City of Edinburgh > Edinburgh (0.04)
- Research Report > New Finding (0.65)
- Research Report > Experimental Study (0.41)
Recommending Composite Items Using Multi-Level Preference Information: A Joint Interaction Modeling Approach
Bi, Xuan, Wang, Yaqiong, Adomavicius, Gediminas, Curley, Shawn
Recommender systems have become ubiquitous across a wide range of fields, such as e-commerce, media consumption (movies, books, music, news, etc.), social networks, finance, and many others, due to their effectiveness in identifying relevant items or content among numerous choices [1, 2]. Traditionally, recommender systems, largely based on collaborative filtering techniques, have focused on recommending individual (or "atomic") items, such as movies or books, by learning users' preferences for these individual items. However, in certain application domains, recommending "composite" items (i.e., combinations of atomic items) is a very important capability. For illustration, consider a clothing/fashion recommender system that recommends "outfits" - combinations of tops (t-shirts, shirts, sweaters) and bottoms (pants, skirts, shorts) - to users. In such a case, the fashion items in a recommended outfit should ideally match both functionally and stylistically, which may require domain expertise (e.g., on style compatibility) beyond individual preferences. Another key challenge for such recommender systems is that a given user's preference for a composite item may not directly translate into the user's preferences for the underlying atomic items, and vice versa.
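The gap between composite and atomic preferences described in the abstract can be sketched as a toy scoring rule (this is not the paper's joint interaction model; all item names, weights, and scores below are hypothetical): an outfit's score blends the user's atomic-item preferences with a pairwise compatibility term.

```python
# Toy sketch (not the paper's model): score a composite "outfit" by blending
# atomic-item preferences with a pairwise compatibility term.
# All items, weights, and scores are hypothetical illustrative values.

def composite_score(user_pref, compat, top, bottom, alpha=0.5):
    """Blend the user's atomic preferences with top-bottom compatibility."""
    atomic = user_pref[top] + user_pref[bottom]
    return (1 - alpha) * atomic + alpha * compat[(top, bottom)]

user_pref = {"tshirt": 0.9, "sweater": 0.4, "jeans": 0.8, "skirt": 0.6}
compat = {("tshirt", "jeans"): 0.1, ("tshirt", "skirt"): 0.3,
          ("sweater", "jeans"): 0.2, ("sweater", "skirt"): 0.95}

outfits = [(t, b) for t in ("tshirt", "sweater") for b in ("jeans", "skirt")]
ranked = sorted(outfits,
                key=lambda o: composite_score(user_pref, compat, *o),
                reverse=True)
print(ranked[0])  # → ('sweater', 'skirt')
```

Here the highest-compatibility pairing wins even though neither item is the user's atomic favorite, illustrating why composite preference cannot be read off atomic preferences alone.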
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.28)
- North America > United States > Michigan (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > California > Santa Clara County > Santa Clara (0.04)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.67)
- Leisure & Entertainment (0.67)
- Information Technology > Services > e-Commerce Services (0.34)
Towards Efficient Real-Time Video Motion Transfer via Generative Time Series Modeling
Haque, Tasmiah, Syed, Md. Asif Bin, Jeong, Byungheon, Bai, Xue, Mohan, Sumit, Paul, Somdyuti, Ahmed, Imtiaz, Das, Srinjoy
Motion transfer is a technique that synthesizes videos by transferring motion dynamics from a driving video to a source image. In this work we propose a deep learning-based framework for real-time video motion transfer, which is critical for bandwidth-efficient applications such as video conferencing, remote health monitoring, virtual reality interaction, and vision-based anomaly detection. This is done using keypoints, which serve as semantically meaningful, compact representations of motion across time. To enable bandwidth savings during video transmission, we forecast keypoints using two generative time series models, VRNN and GRU-NF. The predicted keypoints are transformed into realistic video frames using an optical flow-based module paired with a generator network, thereby enabling efficient, low-frame-rate video transmission. Depending on the application, the framework can either generate a deterministic future sequence or sample a diverse set of plausible futures. Experimental results demonstrate that VRNN achieves the best point-forecast fidelity (lowest MAE) in applications requiring stable and accurate multi-step forecasting, and is particularly competitive in higher-uncertainty, multi-modal settings. This is achieved by introducing recurrently conditioned stochastic latent variables that carry past context to capture uncertainty and temporal variation. The GRU-NF model, on the other hand, enables richer diversity of generated videos while maintaining high visual quality. This is realized by learning an invertible, exact-likelihood mapping between the keypoints and their latent representations, which supports rich and controllable sampling of diverse yet coherent keypoint sequences. Our work lays a foundation for next-generation AI systems that require real-time, bandwidth-efficient, and semantically controllable video generation.
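The autoregressive keypoint-forecasting step can be illustrated with a minimal GRU cell in NumPy (an assumed sketch, not the authors' VRNN or GRU-NF code; the dimensions, random weights, and slice-based readout are illustrative placeholders):

```python
import numpy as np

# Assumed sketch (not the paper's code): one GRU cell step in NumPy, the kind
# of recurrent core used to forecast keypoint trajectories autoregressively.

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate hidden state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_kp, d_h = 20, 32  # 10 (x, y) keypoints, hidden size 32 (hypothetical)
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_kp, d_h), (d_h, d_h)] * 3]  # Wz,Uz, Wr,Ur, Wh,Uh

h = np.zeros(d_h)
kp = rng.normal(size=d_kp)   # stand-in for keypoints from the last real frame
for _ in range(5):           # forecast 5 future steps autoregressively
    h = gru_step(kp, h, *params)
    kp = h[:d_h][:d_kp]      # toy readout: slice the hidden state (a real
                             # model would use a learned projection)
print(kp.shape)  # → (20,)
```

A VRNN additionally injects a stochastic latent variable at each step, and GRU-NF replaces the readout with an invertible flow; both build on exactly this recurrent update.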
- North America > United States > West Virginia > Monongalia County > Morgantown (0.04)
- Asia > India > West Bengal > Kharagpur (0.04)
- South America > Brazil (0.04)
- (5 more...)
Trump clears way for sale of powerful Nvidia H200 chips to China
US President Donald Trump has cleared the way for tech giant Nvidia to sell its advanced H200 chip to China, in a significant easing of Washington's export controls targeting Chinese tech. Trump said on Monday that he had informed Chinese President Xi Jinping of the decision to allow the export of the chip under an arrangement that will see 25 percent of sales paid to the US government.
- Asia > China (1.00)
- Europe (0.52)
- North America > Central America (0.41)
- (7 more...)
The MICCAI Federated Tumor Segmentation (FeTS) Challenge 2024: Efficient and Robust Aggregation Methods for Federated Learning
Linardos, Akis, Pati, Sarthak, Baid, Ujjwal, Edwards, Brandon, Foley, Patrick, Ta, Kevin, Chung, Verena, Sheller, Micah, Khan, Muhammad Irfan, Jafaritadi, Mojtaba, Kontio, Elina, Khan, Suleiman, Mächler, Leon, Ezhov, Ivan, Shit, Suprosanna, Paetzold, Johannes C., Grimberg, Gustav, Nickel, Manuel A., Naccache, David, Siomos, Vasilis, Passerat-Palmbach, Jonathan, Tarroni, Giacomo, Kim, Daewoon, Klausmann, Leonard L., Shah, Prashant, Menze, Bjoern, Makris, Dimitrios, Bakas, Spyridon
We present the design and results of the MICCAI Federated Tumor Segmentation (FeTS) Challenge 2024, which focuses on federated learning (FL) for glioma sub-region segmentation in multi-parametric MRI and evaluates new weight aggregation methods aimed at improving robustness and efficiency. Six participating teams were evaluated using a standardized FL setup and a multi-institutional dataset derived from the BraTS glioma benchmark, consisting of 1,251 training cases, 219 validation cases, and 570 hidden test cases with segmentations for enhancing tumor (ET), tumor core (TC), and whole tumor (WT). Teams were ranked using a cumulative scoring system that considered both segmentation performance, measured by the Dice Similarity Coefficient (DSC) and the 95th percentile Hausdorff Distance (HD95), and communication efficiency, assessed through a convergence score. A PID-controller-based method achieved the top overall ranking, obtaining mean DSC values of 0.733, 0.761, and 0.751 for ET, TC, and WT, respectively, with corresponding HD95 values of 33.922 mm, 33.623 mm, and 32.309 mm, while also demonstrating the highest communication efficiency with a convergence score of 0.764. These findings advance the state of federated learning for medical imaging, surpassing top-performing methods from previous challenge iterations and highlighting PID controllers as effective mechanisms for stabilizing and optimizing weight aggregation in FL. The challenge code is available at https://github.com/FeTS-AI/Challenge.
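For context on what the evaluated aggregation methods modify, here is a minimal FedAvg-style weight aggregation (the standard baseline; the client arrays and sample counts are hypothetical). The winning entry additionally modulates aggregation with a PID controller, whose exact formulation is team-specific and not shown here:

```python
import numpy as np

# Generic sketch of FedAvg-style aggregation, the baseline that FeTS
# aggregation methods build on: client model updates are averaged,
# weighted by local sample counts. Client data below is hypothetical.

def fedavg(client_weights, client_sizes):
    """Sample-count-weighted average of client parameter arrays."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 700]          # local training-case counts per institution
agg = fedavg(clients, sizes)     # agg == array([4.2, 5.2])
```

A PID-controlled variant would adjust these per-client weights round by round based on a feedback signal (e.g., validation error), rather than keeping them fixed at the sample-count ratios.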
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Indiana > Marion County > Indianapolis (0.04)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
- (17 more...)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- (2 more...)
LabOS: The AI-XR Co-Scientist That Sees and Works With Humans
Cong, Le, Smerkous, David, Wang, Xiaotong, Yin, Di, Zhang, Zaixi, Jin, Ruofan, Wang, Yinkai, Gerasimiuk, Michal, Dinesh, Ravi K., Smerkous, Alex, Shi, Lihan, Zheng, Joy, Lam, Ian, Wu, Xuekun, Liu, Shilong, Li, Peishan, Zhu, Yi, Zhao, Ning, Parakh, Meenal, Serrao, Simran, Mohammad, Imran A., Chen, Chao-Yeh, Xie, Xiufeng, Chen, Tiffany, Weinstein, David, Barbone, Greg, Caglar, Belgin, Sunwoo, John B., Li, Fuxin, Deng, Jia, Wu, Joseph C., Wu, Sanfeng, Wang, Mengdi
Modern science advances fastest when thought meets action. LabOS represents the first AI co-scientist that unites computational reasoning with physical experimentation through multimodal perception, self-evolving agents, and Extended-Reality (XR)-enabled human-AI collaboration. By connecting multi-model AI agents, smart glasses, and robots, LabOS allows AI to see what scientists see, understand experimental context, and assist in real-time execution. Across applications -- from cancer immunotherapy target discovery to stem-cell engineering and materials science -- LabOS shows that AI can move beyond computational design to participation, turning the laboratory into an intelligent, collaborative environment where human and machine discovery evolve together.
- North America > United States > Washington > King County > Seattle (0.14)
- North America > United States > California > Santa Clara County > Stanford (0.05)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- (3 more...)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
- Health & Medicine > Therapeutic Area > Hematology > Stem Cells (0.35)
LMCache: An Efficient KV Cache Layer for Enterprise-Scale LLM Inference
Liu, Yuhan, Cheng, Yihua, Yao, Jiayi, An, Yuwei, Chen, Xiaokun, Feng, Shaoting, Huang, Yuyang, Shen, Samuel, Zhang, Rui, Du, Kuntai, Jiang, Junchen
KV cache has traditionally been stored in GPU memory to accelerate the decoding phase of large language model (LLM) inference. However, it is increasingly necessary to move KV caches off GPU devices, to enable cache reuse across different queries and inference engines. Our real-world usage statistics confirm this trend: over time, the total KV cache stored by users has grown rapidly, far exceeding the capacity of GPU memory. Despite this need, an efficient solution for offloading and transferring KV caches has been lacking. We present LMCACHE, the first and so far the most efficient open-source KV caching solution, which extracts KV caches generated by modern LLM engines (vLLM and SGLang), stores them outside GPU memory, and shares them across engines and queries. LMCACHE supports both cache offloading (prefix reuse across queries) and prefill-decode (PD) disaggregation (cross-engine/GPU cache transfer). LMCACHE's high performance and wide adoption stem from the following contributions: (1) highly optimized KV cache data movement, powered by batched data movement operations and compute-I/O pipelining; (2) a modular KV cache connector component, decoupling LMCACHE from the rapid evolution of inference engines; (3) a first-class control API for flexible cache orchestration across GPU, CPU, storage, and network layers. Our evaluation shows that combining LMCACHE with vLLM achieves up to a 15x improvement in throughput across workloads such as multi-round question answering and document analysis. Large-scale adoption of LMCACHE in enterprise settings has yielded valuable insights: for example, fetching KV cache from remote storage unsurprisingly benefits prefill delay, while context truncation, a widely applied technique in industry, can cut the prefix cache hit ratio in half. The source code of LMCACHE is at: https://github.com/LMCache/LMCache.
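The prefix-reuse and truncation effects described in the abstract can be illustrated with a toy prefix-keyed cache (a minimal sketch, not LMCACHE's implementation; the token IDs and payloads are placeholders):

```python
# Toy sketch (not LMCACHE's implementation): prefix-keyed KV cache reuse.
# Keys are token-ID prefixes; string payloads stand in for KV tensors.
# Illustrates why context truncation hurts hit ratio: truncation changes
# the prefix itself, so previously stored entries no longer match.

class PrefixKVCache:
    def __init__(self):
        self.store = {}  # prefix tuple -> cached "KV" payload

    def lookup(self, tokens):
        """Return the longest cached prefix of `tokens` and its payload."""
        for end in range(len(tokens), 0, -1):
            key = tuple(tokens[:end])
            if key in self.store:
                return key, self.store[key]
        return (), None

    def insert(self, tokens, payload):
        self.store[tuple(tokens)] = payload

cache = PrefixKVCache()
prompt = [101, 7, 8, 9, 42]
cache.insert(prompt, "kv-for-full-prompt")

# Same prompt extended by a new user turn: longest-prefix hit reuses 5 tokens.
hit, _ = cache.lookup(prompt + [55, 56])
print(len(hit))   # → 5  (prefill for these tokens is skipped)

# Truncated context (first token dropped) matches no stored prefix.
miss, _ = cache.lookup(prompt[1:])
print(len(miss))  # → 0  (full prefill required)
```

Because truncation shifts every subsequent token's position in the prefix, a cache keyed on prefixes misses even though most of the content is unchanged, consistent with the halved hit ratio the abstract reports.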
- North America > United States > California > Santa Clara County > Santa Clara (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > Colorado (0.04)
- (2 more...)